Search for: All records
Total Resources: 4
- Author / Contributor
  - Esmaeilzadeh, Hadi (4)
  - Mahapatra, Rohan (4)
  - Ahn, Byung Hoon (3)
  - Ghodrati, Soroush (3)
  - Wang, Shu-Ting (3)
  - Xu, Hanyang (3)
  - Alian, Mohammad (2)
  - Kinzer, Sean (2)
  - Kailas, Krishnan (1)
  - Karthikeyan, Lavanya (1)
  - Kim, Dohee (1)
  - Kim, Joon Kyung (1)
  - Mahajan, Divya (1)
  - Mahmoudi, Babak (1)
  - Mamandipoor, Amin (1)
  - Park, Jongse (1)
  - Priebe, Christopher (1)
  - Santhanam, Harsha (1)
  - Sarikhani, Parisa (1)
  - Sharma, Hardik (1)
-
Retrieval-augmented generation (RAG) services are rapidly gaining adoption in enterprise settings because they combine information retrieval systems (e.g., databases) with large language models (LLMs) to enhance response generation and reduce hallucinations. By augmenting an LLM’s fixed pre-trained knowledge with real-time information retrieval, RAG effectively extends a model’s context to large knowledge bases by selectively retrieving only the most relevant information. As a result, RAG provides the effect of dynamic updates to the LLM’s knowledge without expensive and time-consuming retraining. While some deployments keep the entire database in memory, RAG services are increasingly shifting toward persistent storage to accommodate ever-growing knowledge bases, enhance utility, and improve cost-efficiency. This transition, however, fundamentally reshapes the system’s performance profile: empirical analysis reveals that the Search & Retrieval phase becomes the dominant contributor to end-to-end latency. This phase typically involves (1) running a smaller language model to generate query embeddings, (2) executing similarity and relevance checks over varying data structures, and (3) performing frequent, long-latency accesses to persistent storage. To address this triad of challenges, we propose a metamorphic in-storage accelerator architecture that provides the programmability needed to support diverse RAG algorithms, dynamic data structures, and varying computational patterns. The architecture also supports in-storage execution of smaller language models for query embedding generation, while final LLM generation is executed on DGX A100 systems. Experimental results show up to 4.3× and 1.5× improvements in end-to-end throughput compared to conventional retrieval pipelines using Xeon CPUs with NVMe storage and A100 GPUs with DRAM, respectively.
Free, publicly accessible full text available June 20, 2026.
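To make the three-step Search & Retrieval phase described in the abstract concrete, below is a minimal Python sketch under stated assumptions: a stand-in embedding model (a seeded random projection, not a real language model), a brute-force cosine-similarity scan over an in-memory index, and top-k passage retrieval. Every name here (embed_query, top_k_similar, retrieve) is a hypothetical illustration, not the paper's actual algorithm, data structures, or accelerator interface.

```python
import numpy as np

def embed_query(text: str, dim: int = 384) -> np.ndarray:
    """Step 1 stand-in: a real system would run a small embedding language
    model here; we fake it with a deterministic random unit vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def top_k_similar(query_vec: np.ndarray, kb_vecs: np.ndarray, k: int) -> np.ndarray:
    """Step 2: brute-force cosine similarity (rows assumed unit-normalized).
    Production systems typically scan ANN structures (e.g., IVF or HNSW)."""
    sims = kb_vecs @ query_vec
    return np.argsort(-sims)[:k]

def retrieve(query: str, kb_vecs: np.ndarray, passages: list[str], k: int = 5) -> list[str]:
    """Step 3: fetch the top-k passages; with a storage-resident index this is
    where the frequent, long-latency accesses to persistent storage occur."""
    idx = top_k_similar(embed_query(query), kb_vecs, k)
    return [passages[i] for i in idx]

if __name__ == "__main__":
    passages = [f"knowledge-base passage {i}" for i in range(1000)]
    kb_vecs = np.stack([embed_query(p) for p in passages])  # offline-built index
    for p in retrieve("query about RAG systems", kb_vecs, passages, k=3):
        print(p)  # retrieved context to prepend to the LLM prompt
```

In a storage-backed deployment, steps (2) and (3) dominate latency because the index and passages live on NVMe rather than in DRAM; that bottleneck is what the proposed in-storage accelerator targets.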
-
Mahapatra, Rohan; Ghodrati, Soroush; Ahn, Byung Hoon; Kinzer, Sean; Wang, Shu-Ting; Xu, Hanyang; Karthikeyan, Lavanya; Sharma, Hardik; Yazdanbakhsh, Amir; Alian, Mohammad; et al. (ACM)
-
Wang, Shu-Ting; Xu, Hanyang; Mamandipoor, Amin; Mahapatra, Rohan; Ahn, Byung Hoon; Ghodrati, Soroush; Kailas, Krishnan; Alian, Mohammad; Esmaeilzadeh, Hadi (IEEE)
-
Kim, Joon Kyung; Ahn, Byung Hoon; Kinzer, Sean; Ghodrati, Soroush; Mahapatra, Rohan; Yatham, Brahmendra; Wang, Shu-Ting; Kim, Dohee; Sarikhani, Parisa; Mahmoudi, Babak; et al. (IEEE Micro)